Search results for "Computer Science::Computer Vision and Pattern Recognition"
Showing 10 of 193 documents
Restricted compositions and permutations: from old to new Gray codes
2011
Any Gray code for a set of combinatorial objects defines a total order relation on this set: x is less than y if and only if y occurs after x in the Gray code list. Let ≺ denote the order relation induced by the classical Gray code for the product set (the natural extension of the Binary Reflected Gray Code to k-ary tuples). The restriction of ≺ to the set of compositions and bounded compositions gives known Gray codes for those sets. Here we show that ≺ restricted to the set of bounded compositions of an interval still yields a Gray code. An n-composition of an interval is an n-tuple of integers whose sum lies between two integers; and the set of bounded n-compositions of an interval si…
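The induced order can be made concrete with a minimal sketch of the Binary Reflected Gray Code for binary tuples; the function and variable names here are illustrative, not from the paper:

```python
def brgc(n):
    """Binary Reflected Gray Code on n-bit tuples (standard reflect-and-prefix construction)."""
    if n == 0:
        return [[]]
    prev = brgc(n - 1)
    # Prefix 0 to the list, then 1 to its reversal: successive codes differ in one position.
    return [[0] + c for c in prev] + [[1] + c for c in reversed(prev)]

codes = brgc(3)
rank = {tuple(c): i for i, c in enumerate(codes)}

def precedes(x, y):
    """x ≺ y iff x occurs before y in the Gray code list."""
    return rank[tuple(x)] < rank[tuple(y)]
```

Restricting `precedes` to a subset of tuples (e.g. those whose coordinate sum lies in an interval) gives the restricted order the abstract studies.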
Mass calibration of the energy axis in ToF-E elastic recoil detection analysis
2016
We report on procedures that we have developed to mass-calibrate the energy axis of ToF-E histograms in elastic recoil detection analysis. The obtained calibration parameters allow one to transform the ToF-E histogram into a calibrated ToF-M histogram.
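The underlying ToF-E to mass transformation rests on elementary kinematics: with kinetic energy E = ½Mv² and velocity v = L/t over a known flight length L, the recoil mass is M = 2Et²/L². The sketch below shows only this textbook relation, assuming already-calibrated energy and time-of-flight values; the paper's actual calibration parameters are not reproduced here:

```python
def tof_e_to_mass(energy_J, tof_s, flight_length_m):
    """Recoil mass from calibrated energy and time-of-flight.

    E = (1/2) M v^2 with v = L/t  =>  M = 2 E t^2 / L^2.
    """
    velocity = flight_length_m / tof_s
    return 2.0 * energy_J / velocity**2
```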
A segmentation algorithm for noisy images
2005
This paper presents a segmentation algorithm for gray-level images and addresses issues related to its performance on noisy images. It formulates an image segmentation problem as a partition of a weighted image neighborhood hypergraph. To overcome the computational difficulty of directly solving this problem, a multilevel hypergraph partitioning has been used. To evaluate the algorithm, we have studied how noise affects its performance. Alpha-stable noise is considered and its effects on the algorithm are studied. Keywords: graph, hypergraph, neighborhood hypergraph, multilevel hypergraph partitioning, image segmentation, noise removal.
LMI-based 2D-3D Registration: from Uncalibrated Images to Euclidean Scene
2015
This paper investigates the problem of registering a scanned scene, represented by 3D Euclidean point coordinates, and two or more uncalibrated cameras. An unknown subset of the scanned points have their image projections detected and matched across images. The proposed approach assumes the cameras are known only up to an arbitrary projective frame; no calibration or autocalibration is required. The devised solution is based on a Linear Matrix Inequality (LMI) framework that allows simultaneously estimating the projective transformation relating the cameras to the scene and establishing 2D-3D correspondences without triangulating image points. The proposed LMI framewo…
Visual tracking with omnidirectional cameras: an efficient approach
2011
An effective technique for applying visual tracking algorithms to omnidirectional image sequences is presented. The method is based on a spherical image representation which takes into account the distortions and nonlinear resolution of omnidirectional images. Experimental results show that both deterministic and probabilistic tracking methods can be effectively adapted to robustly track an object with an omnidirectional camera.
Central catadioptric image processing with geodesic metric
2011
Because of the distortions produced by the insertion of a mirror, catadioptric images cannot be processed in the same way as classical perspective images. Although the equivalence between such images and spherical images is well known, the use of spherical harmonic analysis often leads to image processing methods that are more difficult to implement. In this paper, we propose to define catadioptric image processing from the geodesic metric on the unit sphere. We show that this definition allows classical image processing methods to be adapted very simply. We focus more particularly on image gradient estimation, interest point detection, and matching. More generally, th…
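The geodesic metric the abstract refers to is the great-circle distance between points mapped onto the unit sphere, which replaces the planar Euclidean pixel distance. A minimal sketch (the function name is illustrative; the paper's full pipeline is not reproduced):

```python
import numpy as np

def geodesic_distance(p, q):
    """Great-circle (geodesic) distance between two points on the unit sphere S^2.

    Inputs are 3D vectors; they are normalized, and the angle between them
    is recovered via the clipped dot product to guard against rounding.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    return float(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))
```

Neighbourhoods, gradients, and interest-point detectors can then be defined with this distance in place of the image-plane metric.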
Adapted Approach for Omnidirectional Egomotion Estimation
2011
Egomotion estimation is based principally on the estimation of the optical flow in the image. Recent research has shown that omnidirectional systems with large fields of view can overcome the limitations of planar-projection imagery in motion analysis. For omnidirectional images, however, the 2D motion is often estimated using methods developed for perspective images. This paper instead computes the motion field using an adapted method that takes into account the distortions present in the omnidirectional image. This 2D motion field is then used as input to the egomotion estimation process using a spherical representation of the motion equation. Expe…
Scale invariant line matching on the sphere
2013
This paper proposes a novel approach to line matching across images captured by different types of cameras, from perspective to omnidirectional ones. Based on the spherical mapping, this method uses spherical SIFT point features to boost line matching and searches for line correspondences using an affine-invariant measure of similarity. It makes it possible to unify the commonest cameras and to process heterogeneous images with the least distortion of visual information.
Visual saliency detection in colour images based on density estimation
2017
A simple and effective method for visual saliency detection in colour images is presented. The method is based on the common observation that local salient regions exhibit geometric and texture patterns distinct from those of neighbouring regions. We model the colour distribution of local image patches with a Gaussian density and measure the saliency of each patch as the statistical distance from that density. Experimental results on public datasets and comparisons with other state-of-the-art methods show the effectiveness of our method.
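One natural reading of "statistical distance from a Gaussian density" is the Mahalanobis distance of a patch's mean colour from a Gaussian fitted to the surrounding region. The sketch below assumes that interpretation; the function name, the use of the patch mean, and the regularization term are all illustrative choices, not details from the paper:

```python
import numpy as np

def patch_saliency(patch_colours, neighbour_colours, eps=1e-6):
    """Illustrative saliency score: Mahalanobis distance of a patch's mean
    colour from a Gaussian fitted to neighbouring-region colours.

    patch_colours, neighbour_colours: arrays of shape (n_pixels, 3).
    eps regularizes the covariance so it is always invertible.
    """
    mu = neighbour_colours.mean(axis=0)
    cov = np.cov(neighbour_colours, rowvar=False) + eps * np.eye(3)
    d = patch_colours.mean(axis=0) - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

A patch whose colours resemble the neighbourhood distribution scores near zero, while a chromatically distinct patch scores high.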
SAMSLAM: Simulated Annealing Monocular SLAM
2013
This paper proposes a novel monocular SLAM approach. For a triplet of successive keyframes, the approach interleaves the registration of the three 3D maps associated with each image pair in the triplet and the refinement of the corresponding poses, by progressively tightening the allowable reprojection error according to a simulated annealing scheme. This approach computes only local overlapping maps of almost constant size, thus avoiding unbounded 3D map growth. It does not require global optimization, loop closure, or back-correction of the poses.
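The "progressively tightening" bound can be pictured as a shrinking sequence of reprojection-error thresholds applied across registration/refinement rounds. This is only a schematic sketch of such a schedule, assuming a geometric decay; the paper's actual annealing parameters are not reproduced here:

```python
def annealing_thresholds(initial_px, decay, n_rounds):
    """Geometrically shrinking reprojection-error bounds (illustrative schedule).

    initial_px: starting allowable reprojection error in pixels.
    decay: multiplicative factor in (0, 1) applied after each round.
    """
    thresholds = []
    bound = initial_px
    for _ in range(n_rounds):
        thresholds.append(bound)
        bound *= decay
    return thresholds
```

Each round would then accept only correspondences whose reprojection error falls under the current bound before re-registering maps and refining poses.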